DMA: Direct Memory Access.
Objective: to store data directly in a user-space buffer through disk I/O (DMA), without operating-system buffering. This avoids consuming kernel buffer memory and avoids the CPU copy (copying data from kernel space to user space).
Technical explanation: direct I/O application scenario: direct I/O is suited to reading large files, because DMA must be initialized on every request. For small files, initializing DMA takes longer than the read itself.
directio: The SRIO directio transfer class is similar to a memcpy transfer between two SRIO devices. One device is the master, which initiates the transfer; the second device is the slave, which responds to the master's requests. The slave device application normally does not become aware of the access (the accessed slave DSP does not know the access was initiated by the master DSP through SRIO). The master must know the destination memory address.
A study of I/O amplification with direct I/O
I/O test for reading a file:
In Symptom 1, iostat reports more than 100 IOPS while the test program reports about 50, roughly half the iostat figure. The difference between the IOPS measured by iostat and the IOPS measured by the test program is called extra I/O. Why does this happen? Because reading a file reads both the file's data block and its index block; iostat counts the I/O for both blocks, while the test program counts only its own reads.
The direct I/O (load/store) module, that is, the LSU, serves as the source of all outgoing direct I/O packets (the LSU is used to configure the SRIO device that initiates data reads and writes; the initiator sends the direct I/O packets). With direct I/O, the
In Nginx, which scenarios are suitable for enabling sendfile, and which scenarios are suitable for enabling directio?
Reply content:
In Nginx, which scenarios are suitable for enabling sendfile and directio?
http://nginx.org/en/docs/http/ngx_http_core_module.html#aio
location /video/ {
    sendfile on;
    aio on;
    directio 8m;
}
ngx_chain_t *
ngx_alloc_chain_link(ngx_pool_t *pool)
{
    ngx_chain_t  *cl;

    cl = pool->chain;
    if (cl) {
        /* there is an idle chain link in the pool's free list */
        pool->chain = cl->next;
        return cl;
    }

    cl = ngx_palloc(pool, sizeof(ngx_chain_t));
    if (cl == NULL) {
        return NULL;
    }

    return cl;
}
This function determines whether the chain needs to be copied. The conditions are complex; analyze them when problems occur. If it returns 1, no copying is required; if it returns 0, the chain needs to be copied.
static ngx_inline ngx_int_t ngx_output_chain_as_is(ngx_output_chain_ctx_t *ctx, ngx_buf_t *buf)
In addition to O_DIRECT_NO_FSYNC, InnoDB uses fsync() to flush the data files; O_DIRECT_NO_FSYNC is the exception. If you specify O_DIRECT or O_DIRECT_NO_FSYNC, the data files are opened with O_DIRECT (on Solaris they are opened with directio()), and if the InnoDB data files are placed on a separate device, mounting it with forcedirectio makes the entire file system use directio. This concerns InnoDB rather than MySQL as a whole because MyISAM does not use this flush method.
On each server, when both AIO and sendfile are enabled on Linux, AIO is used for files larger than or equal to the size specified in the directio directive, while sendfile is used for smaller files or when directio is disabled.
http {
    include mime.types;                        # MIME types
    default_type application/octet-stream;     # default type: binary octet stream
    # log format definition
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_b
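The interplay described above can be made explicit in a location block; a sketch, with an illustrative 4m threshold and path:

```nginx
location /downloads/ {
    sendfile on;    # used for responses smaller than the directio threshold
    aio on;         # used, with O_DIRECT, for files at or above the threshold
    directio 4m;    # files >= 4m bypass the page cache
}
```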
The computer's storage structure is hierarchical: from fast to slow (and from high cost and small capacity to low cost and large capacity), the levels run from registers through the L1 and L2 caches down to disk, with even slower media beyond it. So when a program runs, copying between levels is inevitable, and this is very important. From an optimization perspective, the goal is often to reduce the cost of this copying. For example, if the read and write order on disk is known in advance, it is better to adopt
File-operation optimization:
20. aio on | off | threads[=pool]; enables or disables the AIO feature;
21. directio size | off; enables the O_DIRECT flag on Linux hosts, used when a file is larger than or equal to the given size, e.g. directio 4m;
22. open_file_cache off; or open_file_cache max=N [inactive=time]; Nginx can cache the following three kinds of information: (1) descriptors
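The open_file_cache family from item 22 might look like this in practice (values are illustrative):

```nginx
open_file_cache max=1000 inactive=20s;  # cache up to 1000 entries, evict after 20s of no use
open_file_cache_valid 60s;              # revalidate a cached entry every 60s
open_file_cache_min_uses 2;             # an entry must be used twice within 'inactive' to stay
open_file_cache_errors on;              # cache file-lookup errors as well
```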
1. Flashcache parameters
flashcache.cache_all = 1: cache all content; this can be filtered with a blacklist.
flashcache.write_merge = 1: enable write merging to improve disk write performance.
2. Percona Parameters
innodb_page_size: if Fusion-io is used, 4K performs best; if SAS disks are used, set it to 8K; if there are many full-table scans, set it to 16K. A relatively small page size improves the cache hit rate.
innodb_adaptive_checkpoint: if Fusion-io is used, set it to 3.
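A my.cnf sketch of the settings above (values illustrative; innodb_adaptive_checkpoint exists only in Percona XtraDB builds):

```ini
[mysqld]
innodb_flush_method = O_DIRECT   # open data files with O_DIRECT, as discussed earlier
innodb_page_size    = 8192       # 8K for SAS disks (4K for Fusion-io, 16K for scan-heavy loads)
```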
client_body_temp_path /var/tmp/client_body 2 2; two hexadecimal digits create the first-level subdirectories and two more create the second-level subdirectories; client_max_body_size #m; limits the size of user-uploaded files (the corresponding php.ini parameter must also be changed)
Configuration that restricts client requests: limit_except method { ... } applies access control to methods other than the specified ones;
Example:
limit_except GET {           # applies to every method except the one specified
    allow 172.16.0.0/16;     # hosts in this network may use other methods
    deny all;                # all other hosts are denied
}
Test throughput: the absolute system throughput is not what matters; the key is DRBD's impact on performance, measured before and after it is used. This test writes 512M blocks to the DRBD device for comparison.
#!/bin/bash
resource=r0
test_device=$(drbdadm sh-dev $resource)
test_ll_device=$(drbdadm sh-ll-dev $resource)
drbdadm primary $resource
for i in $(seq 5); do
    dd if=/dev/zero of=$test_device bs=512M count=1 oflag=direct
done
drbdadm down $resource
for i in $(seq 5); do
    dd if=/dev/
(the disk_asynch_io parameter is true); however, because async I/O is not configured at the OS layer (there is no /dev/async file), a trace file is generated with the error message "File '/dev/async' not present: errno = 2".
There are two solutions:
1. Create /dev/async
Create '/dev/async' to suppress these errors. To create '/dev/async', do the following as root:
/sbin/mknod /dev/async c 101 0x0
/usr/bin/chown oracle:dba /dev/async
/usr/bin/chmod 000 /dev/async
Or
2. Disable async I/O at the database level by setting disk_asynch_io to false.
be used. This makes it impossible to use the operating system's page cache. 2. A file can use AIO only when its size is an integer multiple of the block size defined by directio_alignment; the parts that cannot be rounded to the block size are read in blocking mode before and after the aligned file blocks. That is why output buffers are needed. The directio_alignment size depends on the file system you use: the default is 512, and for XFS, note that unless you have modified the XFS bsize, you need to raise it to the XFS default of 4k.
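For an XFS volume, the alignment advice above might be written as follows (path and sizes are illustrative):

```nginx
location /big/ {
    directio 4m;            # only files >= 4m are read with O_DIRECT
    directio_alignment 4k;  # match the XFS default bsize of 4k
    output_buffers 1 128k;  # buffers the unaligned head/tail read in blocking mode
}
```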
The configu
is to copy data from /dev/urandom to the MTD1 partition; each read/write is 4,096 bytes, repeated 1,000 times, so the total amount of data is 4MB.)
Read test:
dd if=/dev/mtd1 of=/dev/null bs=4096 count=1000
(The command above copies data from the MTD1 partition to the null device /dev/null, 4,096 bytes per read/write, 1,000 times, for a total of 4MB.)
dd prints the throughput when it finishes. (ii)
the background, so I/O operations and the application can run concurrently, improving system performance. Using asynchronous I/O increases I/O throughput, and the advantage is even more obvious when applications operate on raw devices. Therefore, applications such as databases and file servers often use asynchronous I/O to issue multiple I/O operations at the same time.
Oracle does not use asynchronous I/O by default. You can check the filesystemio_options parameter (default value: none).
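Changing it might look like this (a sketch, assuming SQL*Plus; SETALL enables both asynchronous and direct I/O, and the parameter is static, so an instance restart is needed):

```sql
-- check the current setting (default is NONE)
SHOW PARAMETER filesystemio_options;
-- enable both async I/O and direct I/O; takes effect after restart
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;
```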